feat: add python type generator #1011
base: master
Conversation
A new class is needed because TypedDict uses NotRequired for missing attributes.
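For readers less familiar with PEP 655, here is a minimal sketch of what NotRequired expresses and why a plain BaseModel doesn't cover the insert case. The `movies` table and its columns are hypothetical; the real generated names come from the database schema.

```python
from typing import Optional, TypedDict

from pydantic import BaseModel
from typing_extensions import NotRequired  # part of `typing` itself on Python 3.11+


# Row model for select results: every column is present in the response.
class PublicMovies(BaseModel):
    id: int
    name: str
    rating: Optional[float]


# Insert payload: columns with defaults (or nullable ones) may be omitted entirely.
# TypedDict expresses "key may be absent" with NotRequired; Optional alone would
# still force the caller to pass the key, just with a None value.
class PublicMoviesInsert(TypedDict):
    name: str
    id: NotRequired[int]
    rating: NotRequired[Optional[float]]
```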
LGTM. In terms of release, we could make a canary release from this PR and set up some client-side tests on the supabase-py repo (even just basic ones, to ensure no regressions and working types end to end).
(A bit like it's done for the TS types here: https://github.com/supabase/supabase-js/blob/master/packages/core/postgrest-js/test/resource-embedding.test.ts#L133-L152)
The idea would be to use the canary image to generate types for a test database, and run some basic runtime + type tests to see if it works as expected before merging this to production. WDYT?
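For illustration, a rough sketch of what such a client-side test could look like, assuming the canary-generated types live in a hypothetical `database_types` module and a seeded test project is reachable through environment variables (the names and structure are only a suggestion, not part of this PR):

```python
import os

from supabase import create_client

from database_types import PublicMoviesInsert  # hypothetical generated module/name


def test_insert_roundtrip() -> None:
    client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_ANON_KEY"])

    # The annotation lets mypy/pyright check the payload against the generated types.
    row: PublicMoviesInsert = {"name": "The Matrix"}
    client.table("movies").insert(row).execute()

    result = client.table("movies").select("*").execute()
    assert any(movie["name"] == "The Matrix" for movie in result.data)
```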
After a small modification to the …
@o-santi please remember to add the necessary changes in the package.json
This looks good to me. I've tested it and the types it returns look correct.
LGTM, one last comment on the PR but nothing blocking.
Great work on this one! Love the AST approach 👍
}: GeneratorMetadata): string => {
  const ctx = new PythonContext(types, columns, schemas)
  const py_tables = tables
    .filter((table) => schemas.some((schema) => schema.name === table.schema))
Note
I was still curious about why this was necessary, and it is indeed needed, but only for the types that aren't filtered at the introspection level. Everything else should already be filtered based on the user-requested schemas.
I made a PR based on your branch with a proposal and test coverage for it: #1016
What is the current behavior?
Adds Python type generation support, piggybacking off of @ryanpeach's pull request #808. The main difference from his approach is that I added a lot more structure by first creating the "AST" and then serializing it through the serialize() method, instead of constructing strings in place everywhere.
Fixes #795.
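To make the AST-then-serialize structure concrete, here is an illustrative sketch, written in Python for brevity; the actual generator in this PR is TypeScript and its node and method names differ:

```python
from dataclasses import dataclass, field


@dataclass
class PyField:
    name: str
    type_hint: str


@dataclass
class PyClass:
    name: str
    base: str
    fields: list[PyField] = field(default_factory=list)

    def serialize(self) -> str:
        # All source text is produced in one place, instead of concatenating
        # strings at every call site.
        lines = [f"class {self.name}({self.base}):"]
        lines += [f"    {f.name}: {f.type_hint}" for f in self.fields] or ["    pass"]
        return "\n".join(lines)


print(PyClass("PublicMovies", "BaseModel", [PyField("id", "int")]).serialize())
```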
Additional context
The idea is that the normal Public{table} class should be used to parse the result of select queries, while the Insert and Update objects should be passed directly into the .insert and .update postgrest methods. As such, they needed to be TypedDicts instead of BaseModels, because that's what the postgrest library expects in those functions.
This may change in the future, especially in a v3. Ideally, I'd love to be able to infer the type of the arguments for the function from the query builder itself, like TS already does, but I'm afraid that Python's type checker is not strong enough to do that. I think a Frankenstein can be stitched together from Literals and @overloads, but I'm not sure how scalable and user-friendly that would be. I will certainly research it more before doing a v3 rewrite. For now, this approach suffices.
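A short usage sketch of that intent, reusing the hypothetical PublicMovies and PublicMoviesInsert names from the earlier sketch plus a PublicMoviesUpdate TypedDict; the actual generated module and class names depend on the schema, and model_validate assumes pydantic v2:

```python
from supabase import create_client

# Hypothetical generated module and names; their shape matches the earlier sketch.
from database_types import PublicMovies, PublicMoviesInsert, PublicMoviesUpdate

supabase = create_client("https://xyzcompany.supabase.co", "public-anon-key")

# Insert/Update payloads are TypedDicts, which is what the postgrest methods accept.
new_row: PublicMoviesInsert = {"name": "Blade Runner"}
supabase.table("movies").insert(new_row).execute()

patch: PublicMoviesUpdate = {"rating": 8.1}
supabase.table("movies").update(patch).eq("name", "Blade Runner").execute()

# Select results arrive as plain dicts; parse them into the generated pydantic model.
response = supabase.table("movies").select("*").execute()
movies = [PublicMovies.model_validate(row) for row in response.data]
```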